Session: How AI Systems Accidentally Manipulate Humans — and Why It Backfires
AI products are scaling fast. Trust is not.
As AI moves from experimentation to monetized, enterprise-scale deployment, a critical risk is emerging: most systems optimize for engagement, conversion, or efficiency — but not for long-term human trust.
In high-stakes domains like commerce, finance, healthcare, and enterprise software, this misalignment creates subtle but compounding consequences. Optimization loops amplify persuasion. Confident model outputs trigger authority bias. Reinforcement systems reward short-term gains while quietly eroding credibility.
This session introduces a practical framework for designing AI systems that balance three forces: model capability, business incentives, and human behavioral realities.
Bio
Harshita Lall is a product strategist operating at the intersection of AI, commerce, and monetization systems. She has led AI-driven retail media and commerce initiatives at both the world's largest retailer and startups, focusing on building scalable, revenue-impacting AI products in high-stakes environments.
Her work centers on aligning model capability with business incentives and human behavior: designing systems that drive measurable growth while mitigating long-term trust and strategic risk. Harshita is also an author and an active voice on product leadership, technology strategy, and thoughtful AI adoption, and she speaks frequently on aligning AI innovation with long-term trust and business value.